This repository was archived by the owner on Dec 18, 2018. It is now read-only.

Conversation

@BrennanConroy (Member)

             Method |  Input | HubProtocol |        Mean |     Error |    StdDev |        Op/s |  Gen 0 | Allocated |
------------------- |------- |------------ |------------:|----------:|----------:|------------:|-------:|----------:|
  ReadSingleMessage |  Large |        Json | 18,124.9 ns | 377.49 ns | 565.01 ns |    55,172.8 | 0.1831 |   17321 B |
 WriteSingleMessage |  Large |        Json | 11,894.0 ns | 289.01 ns | 432.57 ns |    84,076.0 | 0.1968 |   18248 B |
  ReadSingleMessage |  Large |     MsgPack |  4,630.1 ns | 167.94 ns | 251.36 ns |   215,979.3 | 0.0603 |    5920 B |
 WriteSingleMessage |  Large |     MsgPack |  5,752.0 ns | 485.18 ns | 726.19 ns |   173,852.7 | 0.1053 |    9656 B |
  ReadSingleMessage | Medium |        Json | 12,367.9 ns | 519.64 ns | 777.77 ns |    80,854.2 | 0.0870 |    9504 B |
 WriteSingleMessage | Medium |        Json |  5,507.0 ns | 265.29 ns | 397.08 ns |   181,587.1 | 0.0763 |    7272 B |
  ReadSingleMessage | Medium |     MsgPack |  2,362.9 ns | 116.92 ns | 175.00 ns |   423,207.7 | 0.0076 |     992 B |
 WriteSingleMessage | Medium |     MsgPack |  2,076.2 ns | 109.59 ns | 164.04 ns |   481,655.4 | 0.0114 |    1136 B |
  ReadSingleMessage |  Small |        Json |  4,654.5 ns | 207.23 ns | 310.17 ns |   214,845.5 | 0.0740 |    6888 B |
 WriteSingleMessage |  Small |        Json |  2,055.4 ns |  78.02 ns | 116.77 ns |   486,530.6 | 0.0687 |    6464 B |
  ReadSingleMessage |  Small |     MsgPack |    441.5 ns |  21.56 ns |  32.27 ns | 2,265,137.4 | 0.0048 |     448 B |
 WriteSingleMessage |  Small |     MsgPack |    553.9 ns |  13.63 ns |  20.39 ns | 1,805,414.2 | 0.0086 |     832 B |

@BrennanConroy changed the title from "HubProtocol micro benchmarks" to "More micro benchmarks" on Nov 28, 2017
@BrennanConroy (Member, Author)

Added another micro benchmark for broadcasting to many clients using DefaultHubLifetimeManager
Also, benchmarks now do a validation run during ./build.cmd to make sure we didn't break them.

         Method | Connections |       Mean |      Error |     StdDev |      Op/s |  Gen 0 | Allocated |
--------------- |------------ |-----------:|-----------:|-----------:|----------:|-------:|----------:|
 InvokeAsyncAll |           1 |   1.367 us |  0.1058 us |  0.1583 us | 731,520.7 | 0.0002 |     502 B |
 InvokeAsyncAll |          10 |   3.205 us |  0.3724 us |  0.5574 us | 311,993.2 |      - |    1064 B |
 InvokeAsyncAll |        1000 | 332.336 us | 32.7849 us | 49.0709 us |   3,009.0 |      - |   62865 B |

@analogrelay (Contributor)

Hmm, wonder if we should put the connection count in as the Operation count for each of those iterations. The "Op/s" value is kinda distractingly unrelated since there are inherently more operations performed in each InvokeAsync when there are more connections. I'm not sure you can bind a parameter to the operation count though since it's an attribute.

@BrennanConroy (Member, Author)

We should be aware of what Op/s means here; at a minimum we can add a comment to the benchmark:
"Op/s is measuring the number of broadcasts the server can perform with the specified number of clients"

@analogrelay (Contributor) left a comment

Looks good but I already want more ;P

private TestBinder _binder;
private HubMessage _hubMessage;

[Params(Message.Small, Message.Medium, Message.Large)]
Contributor:

Use descriptive names rather than Small/Medium/Large. Also, there are kinda two axes here: the number of payload values and the size of each payload value. We should see if we can cross-product them a little (e.g. a test that serializes a single 4K string, and a test that serializes 4K 1-byte strings). I think aiming for 2-3 "expected scenarios" (e.g. 3-4 small values, 8-10 small values, 1 very large value) and then a couple of "extreme" scenarios (lots of small values, one extraordinarily large value) would be good. Assuming that doesn't balloon the run time too much :).
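A sketch of how the two axes could be crossed, assuming BenchmarkDotNet (which runs every combination of [Params] values); the ArgumentCount/ArgumentSize names and the specific sizes are illustrative, and the InvocationMessage constructor shape mirrors the snippet that appears later in this review:

// Two crossed axes instead of a single Small/Medium/Large knob. BenchmarkDotNet runs
// the full cross product of [Params] values, so 3 counts x 3 sizes = 9 cases per
// benchmark method; keeping these lists short is what keeps the run time in check.
[Params(1, 4, 64)]        // number of payload values
public int ArgumentCount { get; set; }

[Params(1, 128, 4096)]    // size (in chars) of each payload value
public int ArgumentSize { get; set; }

[GlobalSetup]
public void Setup()
{
    var args = new object[ArgumentCount];
    for (var i = 0; i < ArgumentCount; i++)
    {
        args[i] = new string('F', ArgumentSize);
    }

    // Same constructor shape as the Medium/Large cases in this diff.
    _hubMessage = new InvocationMessage("123", true, "Target", null, args);
}

// ReadSingleMessage/WriteSingleMessage bodies stay as they are in the PR.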

Member Author:

That would balloon it like crazy; I think this benchmark already takes 13 minutes.

switch (Input)
{
case Message.Small:
_hubMessage = new CancelInvocationMessage("123");
Contributor:

Let's use the same message type for all of these tests. If that makes "Small" bigger, so be it.
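For example, the Small case could reuse the same message type with a minimal payload (the constructor shape mirrors the Medium/Large cases in this diff; the single argument is just an illustration):

case Message.Small:
    // Same InvocationMessage type as Medium/Large, just with a minimal payload.
    _hubMessage = new InvocationMessage("123", true, "Target", null, 1);
    break;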

@analogrelay (Contributor)

What we're most interested in is how well it scales. Is it linear per connection (ideal), or is there some multiplier effect as more connections arrive? As long as we understand clearly how to find that data, I don't mind. Ideally, Op/s would be the value we expect to scale with connections, but it may be difficult to make the infrastructure support that.

@BrennanConroy (Member, Author) commented Nov 28, 2017

So ignoring 1 connection for now: 10 connections was ~300k Op/s, 100 connections (not shown, but measured during testing) was ~30k Op/s, and 1000 connections was ~3k Op/s. In each case Connections × Op/s works out to roughly 3 million, so it looks fairly linear.

@analogrelay (Contributor)

Nice

@BrennanConroy mentioned this pull request on Nov 28, 2017
@analogrelay (Contributor) left a comment

Reach for the moon @BrennanConroy! Bigger payload! Otherwise good to go (and if that makes the benchmark super slow, just make it a power of two and merge as-is).

_hubMessage = new InvocationMessage("123", true, "Target", null, 1, "string", 2.0f, true, (byte)9, new byte[] { 5, 4, 3, 2, 1 }, 'c', 123456789101112L);
break;
case Message.LargeArguments:
_hubMessage = new InvocationMessage("123", true, "Target", null, new string('F', 1234), new byte[1000]);
Contributor:

larger Larger

LARGER

But seriously, let's try 10 KB (10240) and see how it performs, if the benchmark is super slow, then scale it back. Also use a power of two, not a power of ten, jeez.
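For reference, the LargeArguments case with the suggested 10 KB (10240) payloads would look something like this (same constructor shape as above; the sizes are only the suggestion from this comment):

case Message.LargeArguments:
    // 10 KB (10240) payloads, per the suggestion above.
    _hubMessage = new InvocationMessage("123", true, "Target", null, new string('F', 10240), new byte[10240]);
    break;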

Member Author:

Psh power of two is too clean, computers like those, we don't want to be nice to the computer

@BrennanConroy merged commit 531c7cf into dev on Nov 28, 2017
@BrennanConroy deleted the brecon/bench branch on Nov 28, 2017 23:40